Immediate Reward Reinforcement Learning for Projective Kernel Methods
Authors
Abstract
We extend a reinforcement learning algorithm that has previously been shown to cluster data. We have previously applied the method to unsupervised projection methods: principal component analysis, exploratory projection pursuit, and canonical correlation analysis. We now show how the same methods can be used in feature spaces to perform kernel principal component analysis and kernel canonical correlation analysis.
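The abstract does not give the update rule, but the flavour of learning a kernel projection from an immediate reward can be illustrated with a small sketch. Everything below (the RBF kernel, the variance-based reward, the REINFORCE-style perturbation update, and all hyper-parameters) is an assumption made for this example, not the authors' algorithm.

```python
# Illustrative sketch only.  The RBF kernel, the variance reward, the
# REINFORCE-style perturbation update and every hyper-parameter below are
# assumptions made for this example, not the authors' algorithm.
import numpy as np

def rbf_kernel(X, Y, gamma=0.05):
    # K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 0] *= 3.0                                   # make one direction dominate the variance

N = len(X)
J = np.eye(N) - np.ones((N, N)) / N
Kc = J @ rbf_kernel(X, X) @ J                    # kernel matrix centred in feature space

alpha = rng.normal(scale=0.01, size=N)           # expansion coefficients of the direction
sigma, lr, baseline = 0.1, 0.1, 0.0

def reward(a):
    # Immediate reward: variance captured by the (unit-norm) feature-space direction.
    norm2 = a @ Kc @ a
    return 0.0 if norm2 <= 1e-12 else ((Kc @ a) ** 2).sum() / (N * norm2)

for t in range(2000):
    noise = rng.normal(scale=sigma, size=N)      # stochastic exploration of the direction
    r = reward(alpha + noise)
    baseline = 0.9 * baseline + 0.1 * r          # running average used as reward baseline
    alpha += lr * (r - baseline) * noise / sigma ** 2   # reinforce rewarded perturbations

print("variance captured by learned direction:", reward(alpha))
```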
Similar works
Reinforcement learning with kernels and Gaussian processes
Kernel methods have become popular in many sub-fields of machine learning, with the exception of reinforcement learning; they facilitate rich representations and enable machine learning techniques to work in diverse input spaces. We describe a principled approach to the policy evaluation problem of reinforcement learning. We present a temporal difference (TD) learning algorithm that uses kernel functions. Ou...
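As a rough companion to this abstract, the sketch below shows plain TD(0) policy evaluation with a kernelised value function V(s) = Σ_i w_i k(s, s_i) over a growing dictionary of visited states. The random-walk environment, the RBF kernel, and the dictionary rule are assumptions for illustration; the paper's Gaussian-process treatment is not reproduced here.

```python
# Illustrative sketch only.  The random-walk chain, the RBF kernel and the
# dictionary rule are assumptions for this example; the paper's Gaussian-process
# formulation is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
discount, width, lr = 0.9, 0.5, 0.1

def k(s, t):
    return np.exp(-(s - t) ** 2 / (2 * width ** 2))

centres, weights = [], []                        # kernel centres and their coefficients

def V(s):
    # Value estimate V(s) = sum_i weights[i] * k(s, centres[i])
    return sum(w * k(s, c) for w, c in zip(weights, centres))

s = 0.5
for step in range(5000):
    # Environment (assumed): a random walk on [0, 1]; the reward is the next position.
    s_next = float(np.clip(s + rng.normal(scale=0.1), 0.0, 1.0))
    r = s_next
    delta = r + discount * V(s_next) - V(s)      # TD(0) error
    if all(k(s, c) < 0.99 for c in centres):     # grow the dictionary sparsely
        centres.append(s)
        weights.append(0.0)
    for i, c in enumerate(centres):              # grad of V(s) w.r.t. weights[i] is k(s, c)
        weights[i] += lr * delta * k(s, c)
    s = s_next

print("V(0.1) =", round(V(0.1), 3), " V(0.9) =", round(V(0.9), 3))
```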
Reinforcement Learning by Comparing Immediate Reward
This paper introduces an approach to reinforcement learning that compares immediate rewards using a variation of the Q-Learning algorithm. Unlike conventional Q-Learning, the proposed algorithm compares the current reward with the immediate reward of the previous move and acts accordingly. Relative-reward-based Q-learning is an approach towards interactive learning. Q-Learning is a model-free re...
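One possible reading of "comparing immediate rewards" is to drive a Q-learning update with the difference between the current reward and the previous one. The chain environment, the dense reward, the epsilon-greedy policy, and this particular interpretation are assumptions for illustration only, not the paper's exact algorithm.

```python
# Illustrative sketch only.  The chain environment, the dense reward, the
# epsilon-greedy policy and this reading of "relative reward" (current minus
# previous immediate reward) are assumptions for this example.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 10, 2                      # action 1 moves right, action 0 moves left
lr, discount, eps = 0.1, 0.95, 0.2
Q = np.zeros((n_states, n_actions))

for episode in range(500):
    s, prev_r = 0, 0.0
    for t in range(50):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = s_next / (n_states - 1)              # immediate reward grows with position
        relative_r = r - prev_r                  # compare with the previous immediate reward
        Q[s, a] += lr * (relative_r + discount * Q[s_next].max() - Q[s, a])
        prev_r, s = r, s_next

print("greedy action per state:", Q.argmax(axis=1))
```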
An Analysis of Feature Selection and Reward Function for Model-Based Reinforcement Learning
In this paper, we propose a series of correlation-based feature selection methods for dealing with high dimensionality in feature-rich environments for model-based Reinforcement Learning (RL). Real-world RL tasks usually involve high-dimensional feature spaces where standard RL methods often perform badly. Our proposed approach adopts correlation among state features as a selection criterion. The...
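A minimal sketch of the general idea of correlation-based feature selection is given below. The synthetic data and the correlation-with-reward criterion are simplifications chosen for illustration; the paper's criterion is the correlation among state features themselves.

```python
# Illustrative sketch only.  The synthetic data and the correlation-with-reward
# criterion are assumptions for this example; the paper's criterion is the
# correlation among state features themselves.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_features, k = 1000, 20, 5

states = rng.normal(size=(n_samples, n_features))
# Only the first three features actually influence the reward in this toy setup.
rewards = states[:, 0] + 0.5 * states[:, 1] - 0.3 * states[:, 2] \
          + 0.1 * rng.normal(size=n_samples)

# Rank features by the absolute Pearson correlation of each column with the reward.
corr = np.abs([np.corrcoef(states[:, j], rewards)[0, 1] for j in range(n_features)])
selected = np.argsort(corr)[::-1][:k]

print("selected feature indices:", sorted(selected.tolist()))
```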
Efficient exploration for optimizing immediate reward
We consider the problem of learning an effective behavior strategy from reward. Although much studied, the question of how to use prior knowledge to scale optimal behavior learning up to real-world problems remains open. We investigate the inherent data complexity of behavior learning when the goal is simply to optimize immediate reward. Although easier than reinforcement learni...
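When only the immediate reward is optimised, the task is bandit-like rather than sequential. The sketch below, with Bernoulli arms and an epsilon-greedy learner, is an illustration of that reduction under assumed rewards, not the paper's analysis of data complexity or prior knowledge.

```python
# Illustrative sketch only.  The Bernoulli arms, their means and the
# epsilon-greedy rule are assumptions for this example, not the paper's
# analysis of data complexity or prior knowledge.
import numpy as np

rng = np.random.default_rng(4)
true_means = np.array([0.2, 0.5, 0.8])           # unknown expected immediate rewards
counts, estimates = np.zeros(3), np.zeros(3)
eps = 0.1

for t in range(5000):
    a = int(rng.integers(3)) if rng.random() < eps else int(estimates.argmax())
    r = float(rng.random() < true_means[a])      # Bernoulli immediate reward
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]   # incremental mean of each arm's reward

print("estimated means:", np.round(estimates, 2), " best action:", int(estimates.argmax()))
```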